On Sketching Matrix Norms and the Top Singular Vector

Authors

  • Yi Li
  • Huy L. Nguyen
  • David P. Woodruff
Abstract

Sketching is a prominent algorithmic tool for processing large data. In this paper, we study the problem of sketching matrix norms. We consider two sketching models. The first is bilinear sketching, in which there is a distribution over pairs of r×n matrices S and n×s matrices T such that for any fixed n×n matrix A, from S·A·T one can approximate ‖A‖_p up to an approximation factor α ≥ 1 with constant probability, where ‖A‖_p is a matrix norm. The second is general linear sketching, in which there is a distribution over linear maps L : ℝ^(n²) → ℝ^k, such that for any fixed n×n matrix A, interpreting it as a vector in ℝ^(n²), from L(A) one can approximate ‖A‖_p up to a factor α.

We study some of the most frequently occurring matrix norms, which correspond to Schatten p-norms for p ∈ {0, 1, 2, ∞}. The p-th Schatten norm of a rank-r matrix A is defined to be ‖A‖_p = (∑_{i=1}^r σ_i^p)^(1/p), where σ_1, …, σ_r are the singular values of A. When p = 0, ‖A‖_0 is defined to be the rank of A. The cases p = 1, 2, and ∞ correspond to the trace, Frobenius, and operator norms, respectively. For bilinear sketches we show:

1. For p = ∞, any sketch must have r·s = Ω(n²/α⁴) dimensions. This matches an upper bound of Andoni and Nguyen (SODA, 2013), and implies one cannot approximate the top right singular vector v of A by a vector v′ with ‖v′ − v‖_2 ≤ 1/2 using r·s = õ(n²) dimensions.

2. For p ∈ {0, 1} and constant α, any sketch must have r·s ≥ n^(1−ε) dimensions, for an arbitrarily small constant ε > 0.

3. For even integers p ≥ 2, we give a sketch with r·s = O(n^(2−4/p) ε^(−2)) dimensions for obtaining a (1+ε)-approximation. This is optimal up to logarithmic factors, and is the first general subquadratic upper bound for sketching the Schatten norms.

For general linear sketches our results, though not optimal, are qualitatively similar, showing that for p = ∞, k = Ω(n^(3/2)/α⁴), and for p ∈ {0, 1}, k = Ω(√n). These give separations in the sketching complexity of Schatten p-norms with the corresponding vector p-norms, and rule out a table-lookup nearest-neighbor search for p = 1, making progress on a question of Andoni.
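The Schatten p-norm defined in the abstract can be computed directly from the singular values; a minimal NumPy sketch (the function name `schatten_norm` and the rank tolerance are illustrative choices, not part of the paper) checks that p = 2 coincides with the Frobenius norm and p = ∞ with the operator norm:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm of A via its singular values.
    p = 0 is defined as rank(A); p = inf is the largest singular value."""
    s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    if p == 0:
        return int(np.sum(s > 1e-10))        # rank: count of nonzero singular values
    if p == np.inf:
        return s[0]                          # operator (spectral) norm
    return np.sum(s ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# sanity checks against NumPy's built-in matrix norms
assert np.isclose(schatten_norm(A, 2), np.linalg.norm(A, 'fro'))
assert np.isclose(schatten_norm(A, np.inf), np.linalg.norm(A, 2))
assert schatten_norm(A, 0) == 5              # a random Gaussian matrix is full rank
```

Note that this exact computation requires a full SVD of A; the point of the paper is how small a linear sketch of A can be while still allowing ‖A‖_p to be approximated.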


Similar resources

A Novel Noise Reduction Method Based on Subspace Division

This article presents a new subspace-based technique for reducing the noise of time-series signals. In the proposed approach, the signal is initially represented as a data matrix. Then, using the Singular Value Decomposition (SVD), the noisy data matrix is divided into a signal subspace and a noise subspace. In this subspace division, each derivative of the singular values with respect to rank order is u...

Full text
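As a rough illustration of the subspace-division idea in the excerpt above, here is a minimal truncated-SVD denoising sketch. The function name, the fixed rank k, and the Gaussian noise model are illustrative assumptions; the article itself selects the signal/noise split using derivatives of the singular values, which is not reproduced here:

```python
import numpy as np

def svd_denoise(X, k):
    """Keep the top-k singular components (signal subspace)
    and discard the remainder (noise subspace)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(1)
signal = np.outer(rng.standard_normal(50), rng.standard_normal(40))  # rank-1 signal
noisy = signal + 0.01 * rng.standard_normal((50, 40))                # small Gaussian noise
denoised = svd_denoise(noisy, k=1)

# the rank-1 projection should be closer to the clean signal than the noisy input
assert np.linalg.norm(denoised - signal) < np.linalg.norm(noisy - signal)
```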


On the singular fuzzy linear system of equations

The linear system of equations Ax = b, where A = [a_ij] ∈ ℂ^(n×n) is a crisp singular matrix and the right-hand side is a fuzzy vector, is called a singular fuzzy linear system of equations. In this paper, solving singular fuzzy linear systems of equations using generalized inverses, such as the Drazin inverse and the pseudo-inverse, is investigated.

Full text

Lectures on Randomized Numerical Linear Algebra

2 Linear Algebra (p. 3)
  2.1 Basics (p. 3)
  2.2 Norms (p. 4)
  2.3 Vector norms (p. 4)
  2.4 Induced matrix norms ...

Full text

Stat260/CS294: Randomized Algorithms for Matrices and Data

Here, we will consider one approach for extending the ideas underlying the least-squares algorithm we discussed in class to non-tall matrices. Let A ∈ ℝ^(n×d) be a matrix, where both n and d are large and rank(A) = k exactly, and let B ∈ ℝ^(n×t). Consider the problem min_{X ∈ ℝ^(d×t)} ‖AX − B‖, where ‖·‖ is a unitarily invariant matrix norm. The solution to this problem is X_opt = A†B, and here we consider a...

Full text
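The closed-form minimizer X_opt = A†B (with A† the Moore–Penrose pseudoinverse) mentioned in the excerpt above can be verified numerically; a small sketch, where the dimensions and random data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 10))   # "non-tall": more columns than rows
B = rng.standard_normal((6, 3))

# minimizer of ||A X - B||_F is X_opt = pinv(A) @ B
X_opt = np.linalg.pinv(A) @ B

# agrees with NumPy's least-squares solver applied to the same system
X_lstsq = np.linalg.lstsq(A, B, rcond=None)[0]
assert np.allclose(X_opt, X_lstsq)
```

For an underdetermined A, both routes return the minimum-norm solution, which is why the comparison holds.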


Journal title:

Volume   Issue

Pages  -

Publication date: 2014